Table of Contents
The goal of this project is to carry out a thorough comparison between three distinct transfer learning models - VGG16, InceptionV3, and ResNet50 and accuracy has been used as a metric to measure the models performance
$\textrm{Accuracy} = \frac{\textrm{Number of correclty predicted images}}{\textrm{Total number of tested images}} \times 100\%$
Final results look as follows:
| Set | Accuracy |
|---|---|
| Validation Set* | ~92% |
| Test Set* | ~92% |
*
validation set - is the set used during the model training to adjust the hyperparameters. test set - is the set which is used to evaluate model's performanceThe image data that was used for this problem is Brain MRI Images for Brain Tumor Detection. It conists of MRI scans of two classes:
NO - no tumor, encoded as 0YES - tumor, encoded as 1Unfortunately, the data set description doesn't hold any information where this MRI scans come from..
A brain tumor occurs when abnormal cells form within the brain. There are two main types of tumors: cancerous (malignant) tumors and benign tumors. Cancerous tumors can be divided into primary tumors, which start within the brain, and secondary tumors, which have spread from elsewhere, known as brain metastasis tumors. All types of brain tumors may produce symptoms that vary depending on the part of the brain involved. These symptoms may include headaches, seizures, problems with vision, vomiting and mental changes. The headache is classically worse in the morning and goes away with vomiting. Other symptoms may include difficulty walking, speaking or with sensations. As the disease progresses, unconsciousness may occur.
Brain metastasis in the right cerebral hemisphere from lung cancer, shown on magnetic resonance imaging.
Source: Wikipedia
#Setting up the Environment
#Importing the clear_output function from IPython.display library
from IPython.display import clear_output
!pip install imutils
clear_output()
#Importing models and preprocessing functions from Keras applications for VGG19, Xception, InceptionV3, and ResNet50.
from keras.applications.vgg19 import VGG19,preprocess_input
from keras.applications.xception import Xception,preprocess_input
from keras.applications.inception_v3 import InceptionV3,inception_v3
from keras.applications.resnet50 import ResNet50,resnet50
Using TensorFlow backend.
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
#Importing necessary libraries for data analysis
import numpy as np
from tqdm import tqdm
import cv2
import os
import shutil
import itertools
import imutils
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot
from plotly import tools
#initialize ImageDataGenerator and Import VGG16 model with preprocessing function from keras applications
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import VGG16, preprocess_input
from keras import layers
from keras.models import Model, Sequential
from keras.optimizers import Adam, RMSprop
from keras.callbacks import EarlyStopping
init_notebook_mode(connected=True)
RANDOM_SEED = 123
The images are currently in one folder with yes and no subfolders. The data will be split further into 3 folders train, val and test folders which makes it more organised and easier to retrive the images for subsequent analysis. The new folder heirarchy will look as follows:
!apt-get install tree
#clear_output()
# create new folders
!mkdir TRAIN TEST VAL TRAIN/YES TRAIN/NO TEST/YES TEST/NO VAL/YES VAL/NO
!tree -d
The following NEW packages will be installed:
tree
0 upgraded, 1 newly installed, 0 to remove and 12 not upgraded.
Need to get 46.1 kB of archives.
After this operation, 106 kB of additional disk space will be used.
Get:1 http://deb.debian.org/debian stretch/main amd64 tree amd64 1.7.0-5 [46.1 kB]
Fetched 46.1 kB in 0s (1382 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package tree.
(Reading database ... 59251 files and directories currently installed.)
Preparing to unpack .../tree_1.7.0-5_amd64.deb ...
Unpacking tree (1.7.0-5) ...
Setting up tree (1.7.0-5) ...
.
├── TEST
│ ├── NO
│ └── YES
├── TRAIN
│ ├── NO
│ └── YES
└── VAL
├── NO
└── YES
9 directories
#Setting the path to the folder containing the brain MRI images for tumour classification.
IMG_PATH = '../input/brain-mri-images-for-brain-tumor-detection/brain_tumor_dataset/'
# split the data by train/val/test
for CLASS in os.listdir(IMG_PATH):
if not CLASS.startswith('.'):
IMG_NUM = len(os.listdir(IMG_PATH + CLASS))
for (n, FILE_NAME) in enumerate(os.listdir(IMG_PATH + CLASS)):
img = IMG_PATH + CLASS + '/' + FILE_NAME
#Copying the first five images of the class to the TEST folder.
if n < 5:
shutil.copy(img, 'TEST/' + CLASS.upper() + '/' + FILE_NAME)
#Copying the remaining images (excluding the first five and last 20%) to the TRAIN folder.
elif n < 0.8*IMG_NUM:
shutil.copy(img, 'TRAIN/'+ CLASS.upper() + '/' + FILE_NAME)
#Copying the last 20% of images to the VAL folder.
else:
shutil.copy(img, 'VAL/'+ CLASS.upper() + '/' + FILE_NAME)
def load_data(dir_path, img_size=(100,100)):
"""
Load resized images as np.arrays with the corresponding labels
"""
X = []
y = []
i = 0
labels = dict()
for path in tqdm(sorted(os.listdir(dir_path))):
if not path.startswith('.'):
labels[i] = path
for file in os.listdir(dir_path + path):
if not file.startswith('.'):
img = cv2.imread(dir_path + path + '/' + file)
X.append(img)
y.append(i)
i += 1
X = np.array(X)
y = np.array(y)
print(f'{len(X)} images loaded from {dir_path} directory.')
return X, y, labels
#plotting a confusion matrix given the matrix and list of class names
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.figure(figsize = (6,6))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
thresh = cm.max() / 2.
cm = np.round(cm,2)
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
#setting variables for directories of training,testing & validation data
TRAIN_DIR = 'TRAIN/'
TEST_DIR = 'TEST/'
VAL_DIR = 'VAL/'
#setting standardised image size to 224 x 224
IMG_SIZE = (224,224)
# use predefined function to load the image data into workspace
X_train, y_train, labels = load_data(TRAIN_DIR, IMG_SIZE)
X_test, y_test, _ = load_data(TEST_DIR, IMG_SIZE)
X_val, y_val, _ = load_data(VAL_DIR, IMG_SIZE)
100%|██████████| 2/2 [00:00<00:00, 4.65it/s] 100%|██████████| 2/2 [00:00<00:00, 97.31it/s] 100%|██████████| 2/2 [00:00<00:00, 21.24it/s]
193 images loaded from TRAIN/ directory. 10 images loaded from TEST/ directory. 50 images loaded from VAL/ directory.
The following code helps to observe the distribution between the training,validation and test set
#Create an empty dictionary to store the count of each class in each set
y = dict()
y[0] = []
y[1] = []
#Loop through each set (train, validation, test) and count the number of instances of each class
for set_name in (y_train, y_val, y_test):
y[0].append(np.sum(set_name == 0))
y[1].append(np.sum(set_name == 1))
#Define the bar chart trace for class 0 (no tumour)
trace0 = go.Bar(
x=['Train Set', 'Validation Set', 'Test Set'],
y=y[0],
name='No',
marker=dict(color='#33cc33'),
opacity=0.7
)
#Define the bar chart trace for class 1 (tumor)
trace1 = go.Bar(
x=['Train Set', 'Validation Set', 'Test Set'],
y=y[1],
name='Yes',
marker=dict(color='#ff3300'),
opacity=0.7
)
#Combine the traces into a list of data
data = [trace0, trace1]
#Define the layout of the chart
layout = go.Layout(
title='Count of classes in each set',
xaxis={'title': 'Set'},
yaxis={'title': 'Count'}
)
#Create the figure object and plot the chart
fig = go.Figure(data, layout)
iplot(fig)
def plot_samples(X, y, labels_dict, n=50):
"""
Creates a gridplot for desired number of images (n) from the specified set
"""
#Loop through each label index in the dictionary
for index in range(len(labels_dict)):
imgs = X[np.argwhere(y == index)][:n]
#Select the first n images with the current label.
j = 10
i = int(n/j)
plt.figure(figsize=(15,6))
#Set up the figure for displaying the images.
c = 1
for img in imgs:
plt.subplot(i,j,c)
plt.imshow(img[0])
plt.xticks([])
plt.yticks([])
c += 1
plt.suptitle('Tumor: {}'.format(labels_dict[index]))
plt.show()
plot_samples(X_train, y_train, labels, 10)
As observed from the above images, they have differences in width and height and diffent size of "black corners". Since the image size for VGG-16 imput layer is (224,224) some wide images may look weird after resizing. Histogram of ratio distributions (ratio = width/height):
RATIO_LIST = []
for set in (X_train, X_test, X_val):
for img in set:
RATIO_LIST.append(img.shape[1]/img.shape[0])
plt.hist(RATIO_LIST)
plt.title('Distribution of Image Ratios')
plt.xlabel('Ratio Value')
plt.ylabel('Count')
plt.show()
The first step of "normalization" would be to crop the brain out of the images. we can use this technique which was perfectly described in pyimagesearch blog
def crop_imgs(set_name, add_pixels_value=0):
"""
Finds the extreme points on the image and crops the rectangular out of them
"""
set_new = []
for img in set_name:
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
# threshold the image, then perform a series of erosions +
# dilations to remove any small regions of noise
thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=2)
# find contours in thresholded image, then grab the largest one
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)
# find the extreme points
extLeft = tuple(c[c[:, :, 0].argmin()][0])
extRight = tuple(c[c[:, :, 0].argmax()][0])
extTop = tuple(c[c[:, :, 1].argmin()][0])
extBot = tuple(c[c[:, :, 1].argmax()][0])
ADD_PIXELS = add_pixels_value
new_img = img[extTop[1]-ADD_PIXELS:extBot[1]+ADD_PIXELS, extLeft[0]-ADD_PIXELS:extRight[0]+ADD_PIXELS].copy()
set_new.append(new_img)
return np.array(set_new)
Below is the function to display image interpolation
#image interpolation code
img = cv2.imread('../input/brain-mri-images-for-brain-tumor-detection/brain_tumor_dataset/yes/Y108.jpg')
img = cv2.resize(
img,
dsize=IMG_SIZE,
interpolation=cv2.INTER_CUBIC
)
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)
# threshold the image, then perform a series of erosions +
# dilations to remove any small regions of noise
thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=2)
# find contours in thresholded image, then grab the largest one
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
c = max(cnts, key=cv2.contourArea)
# find the extreme points
extLeft = tuple(c[c[:, :, 0].argmin()][0])
extRight = tuple(c[c[:, :, 0].argmax()][0])
extTop = tuple(c[c[:, :, 1].argmin()][0])
extBot = tuple(c[c[:, :, 1].argmax()][0])
# add contour on the image
img_cnt = cv2.drawContours(img.copy(), [c], -1, (0, 255, 255), 4)
# add extreme points
img_pnt = cv2.circle(img_cnt.copy(), extLeft, 8, (0, 0, 255), -1)
img_pnt = cv2.circle(img_pnt, extRight, 8, (0, 255, 0), -1)
img_pnt = cv2.circle(img_pnt, extTop, 8, (255, 0, 0), -1)
img_pnt = cv2.circle(img_pnt, extBot, 8, (255, 255, 0), -1)
# crop
ADD_PIXELS = 0
new_img = img[extTop[1]-ADD_PIXELS:extBot[1]+ADD_PIXELS, extLeft[0]-ADD_PIXELS:extRight[0]+ADD_PIXELS].copy()
plt.figure(figsize=(15,6))
plt.subplot(141)
plt.imshow(img)
plt.xticks([])
plt.yticks([])
plt.title('Step 1. Get the original image')
plt.subplot(142)
plt.imshow(img_cnt)
plt.xticks([])
plt.yticks([])
plt.title('Step 2. Find the biggest contour')
plt.subplot(143)
plt.imshow(img_pnt)
plt.xticks([])
plt.yticks([])
plt.title('Step 3. Find the extreme points')
plt.subplot(144)
plt.imshow(new_img)
plt.xticks([])
plt.yticks([])
plt.title('Step 4. Crop the image')
plt.show()
# apply this for each set
X_train_crop = crop_imgs(set_name=X_train)
X_val_crop = crop_imgs(set_name=X_val)
X_test_crop = crop_imgs(set_name=X_test)
#plot 10 images for each label in the training set
plot_samples(X_train_crop, y_train, labels, 10)
"""This function takes in an image dataset (x_set) with corresponding
class labels (y_set) and folder name to save the images in"""
def save_new_images(x_set, y_set, folder_name):
i = 0
#using a for loop to loop through each image in dataset and checks against class label
for (img, imclass) in zip(x_set, y_set):
# save image with corresponfing label in the specific folder
if imclass == 0:
cv2.imwrite(folder_name+'NO/'+str(i)+'.jpg', img)
else:
cv2.imwrite(folder_name+'YES/'+str(i)+'.jpg', img)
i += 1
# create new directories to store images
!mkdir TRAIN_CROP TEST_CROP VAL_CROP TRAIN_CROP/YES TRAIN_CROP/NO TEST_CROP/YES TEST_CROP/NO VAL_CROP/YES VAL_CROP/NO
#save the new cropped and resized image with the appropriate subdirectories
save_new_images(X_train_crop, y_train, folder_name='TRAIN_CROP/')
save_new_images(X_val_crop, y_val, folder_name='VAL_CROP/')
save_new_images(X_test_crop, y_test, folder_name='TEST_CROP/')
The next step would be resizing images to (224,224) and applying preprocessing needed for VGG-16 model input.
# defining a function to resize the image and apply vgg-16 preprocessing technique
def preprocess_imgs(set_name, img_size):
"""
Resize and apply VGG-16 preprocessing
"""
set_new = []
for img in set_name:
img = cv2.resize(
img,
dsize=img_size,
interpolation=cv2.INTER_CUBIC
)
set_new.append(preprocess_input(img))
# displays the preprocessed set of images as a NumPy array.
return np.array(set_new)
# creates new preprocessed image sets for training,testing and validation to the cropped image
X_train_prep = preprocess_imgs(set_name=X_train_crop, img_size=IMG_SIZE)
X_test_prep = preprocess_imgs(set_name=X_test_crop, img_size=IMG_SIZE)
X_val_prep = preprocess_imgs(set_name=X_val_crop, img_size=IMG_SIZE)
# generates gridplot of 10 images from preprocessing training set with corresponding labels
plot_samples(X_train_prep, y_train, labels, 10)
We will be using Transfer Learning with VGG-16 architecture , RESNET 50 ,InceptionV3 and weights as a base model.
Data Augmentation Data Augmentation helps to "increase" the size of training set.
Data Augmentation code is given down below
# define instance of "ImageDataGenerator" class to generate augmented images
demo_datagen = ImageDataGenerator(
rotation_range=15,
width_shift_range=0.05,
height_shift_range=0.05,
rescale=1./255,
shear_range=0.05,
brightness_range=[0.1, 1.5],
horizontal_flip=True,
vertical_flip=True
)
#generates augmented images using parameters set in the "demo_datagen" object
os.mkdir('preview')
x = X_train_crop[0]
x = x.reshape((1,) + x.shape)
i = 0
for batch in demo_datagen.flow(x, batch_size=1, save_to_dir='preview', save_prefix='aug_img', save_format='jpg'):
i += 1
# the for loop runs for 20 iterations generating 20 augmented images
if i > 20:
break
"""This below code uses the plt function to display an origin iamage
from the training set, and also generates and displays
a grid of augmented images using the demo_datagen generator"""
# display original image from the training set using the imshow() function.
plt.imshow(X_train_crop[0])
plt.xticks([])
plt.yticks([])
plt.title('Original Image')
plt.show()
plt.figure(figsize=(15,6))
i = 1
for img in os.listdir('preview/'):
img = cv2.cv2.imread('preview/' + img)
#convert the color space of the current image from BGR to RGB.
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.subplot(3,7,i)
plt.imshow(img)
plt.xticks([])
plt.yticks([])
i += 1
if i > 3*7:
break
plt.suptitle('Augemented Images')
plt.show()
#remove the directory "preview" and all contents
!rm -rf preview/
#Set the directories for training and validation sets
TRAIN_DIR = 'TRAIN_CROP/'
VAL_DIR = 'VAL_CROP/'
#Define an ImageDataGenerator object to augment training data and apply preprocessing
train_datagen = ImageDataGenerator(
rotation_range=15,
width_shift_range=0.1,
height_shift_range=0.1,
shear_range=0.1,
brightness_range=[0.5, 1.5],
horizontal_flip=True,
vertical_flip=True,
preprocessing_function=preprocess_input
)
test_datagen = ImageDataGenerator(
preprocessing_function=preprocess_input
)
#Generate batches of augmented and preprocessed images for training set
train_generator = train_datagen.flow_from_directory(
TRAIN_DIR,
color_mode='rgb',
target_size=IMG_SIZE,
batch_size=32,
class_mode='binary',
seed=RANDOM_SEED
)
#Generate batches of preprocessed images for validation set
validation_generator = test_datagen.flow_from_directory(
VAL_DIR,
color_mode='rgb',
target_size=IMG_SIZE,
batch_size=16,
class_mode='binary',
seed=RANDOM_SEED
)
Found 193 images belonging to 2 classes. Found 50 images belonging to 2 classes.
# load base model for resnet 50 and specify the location of model's weight files
ResNet50_weight_path = '../input/keras-pretrained-models/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
resnet50_x = ResNet50(
weights=ResNet50_weight_path,
include_top=False,
input_shape=IMG_SIZE + (3,)
)
/opt/conda/lib/python3.6/site-packages/keras_applications/resnet50.py:265: UserWarning: The output shape of `ResNet50(include_top=False)` has been changed since Keras 2.2.0.
# load base model for InceptionV3 and specify the location of model's weight files
InceptionV3_weight_path = '../input/keras-pretrained-models/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
inceptionV3 = InceptionV3(
weights=InceptionV3_weight_path,
include_top=False,
input_shape=IMG_SIZE + (3,)
)
# load base model and specify the location of model's weight files
vgg16_weight_path = '../input/keras-pretrained-models/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5'
vgg = VGG16(
weights=vgg16_weight_path,
include_top=False,
input_shape=IMG_SIZE + (3,)
)
#import different modules required to conduct model evaluation
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import math
import cv2
import matplotlib.pyplot as plt
import os
import seaborn as sns
import umap
from PIL import Image
from scipy import misc
from os import listdir
from os.path import isfile, join
import numpy as np
from scipy import misc
from random import shuffle
from collections import Counter
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras.utils.np_utils import to_categorical
from keras.layers import Input
#import necessary libraries and modules for image segmentation
import os
import sys
import random
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
from itertools import chain
from skimage.io import imread, imshow, imread_collection, concatenate_images
from skimage.transform import resize
from skimage.morphology import label
from keras.models import Model, load_model
from keras.layers import Input
from keras.layers.core import Dropout, Lambda
from keras.layers.convolutional import Conv2D, Conv2DTranspose
from keras.layers.pooling import MaxPooling2D
from keras.layers.merge import concatenate
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras import backend as K
import keras
# plot feature map of first conv layer for given image
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.models import Model
from matplotlib import pyplot
from numpy import expand_dims
f = plt.figure(figsize=(16,16))
# load the modelf = plt.figure(figsize=(10,3))
model = VGG16()
# redefine model to output right after the first hidden layer
model = Model(inputs=model.inputs, outputs=model.layers[1].output)
model.summary()
# load the image with the required shape
# convert the image to an array
img = img_to_array(X_val_prep[43])
# expand dimensions so that it represents a single 'sample'
img = expand_dims(img, axis=0)
# prepare the image (e.g. scale pixel values for the vgg)
img = preprocess_input(img)
# get feature map for first hidden layer
feature_maps = model.predict(img)
# plot all 64 maps in an 8x8 squares
square = 8
ix = 1
for _ in range(square):
for _ in range(square):
# specify subplot and turn of axis
ax = pyplot.subplot(square, square, ix)
ax.set_xticks([])
ax.set_yticks([])
# plot filter channel in grayscale
pyplot.imshow(feature_maps[0, :, :, ix-1], cmap='viridis')
ix += 1
# show the figure
pyplot.show()
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5 553467904/553467096 [==============================] - 17s 0us/step _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_4 (InputLayer) (None, 224, 224, 3) 0 _________________________________________________________________ block1_conv1 (Conv2D) (None, 224, 224, 64) 1792 ================================================================= Total params: 1,792 Trainable params: 1,792 Non-trainable params: 0 _________________________________________________________________
#define neural network model based on vgg16 architecture for binary image classifications
#Set the number of classes to 1 since it is a binary classification problem.
NUM_CLASSES = 1
#Add the pre-trained VGG16 model as the base of the new model.
vgg16 = Sequential()
vgg16.add(vgg)
vgg16.add(layers.Dropout(0.3))
vgg16.add(layers.Flatten())
vgg16.add(layers.Dropout(0.5))
vgg16.add(layers.Dense(NUM_CLASSES, activation='sigmoid'))
#Freeze the weights of the pre-trained VGG16 model so that they are not updated during training.
vgg16.layers[0].trainable = False
#Compile model with binary cross-entropy loss,optimizer and accuracy metric
vgg16.compile(
loss='binary_crossentropy',
optimizer=RMSprop(lr=1e-4),
metrics=['accuracy']
)
#Recompile the model
vgg16.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=0.0003, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False), metrics=["accuracy"])
#display the summary of the vgg-16 model
vgg16.summary()
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= vgg16 (Model) (None, 7, 7, 512) 14714688 _________________________________________________________________ dropout_1 (Dropout) (None, 7, 7, 512) 0 _________________________________________________________________ flatten_1 (Flatten) (None, 25088) 0 _________________________________________________________________ dropout_2 (Dropout) (None, 25088) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 25089 ================================================================= Total params: 14,739,777 Trainable params: 25,089 Non-trainable params: 14,714,688 _________________________________________________________________
# visualize feature maps output from each block in the vgg model
from keras.applications.vgg16 import VGG16
from keras.applications.vgg16 import preprocess_input
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array
from keras.models import Model
import matplotlib.pyplot as plt
from numpy import expand_dims
# load the model
model = VGG16()
# redefine model to output right after the first hidden layer
ixs = [2, 5, 9, 13, 17]
outputs = [model.layers[i].output for i in ixs]
model = Model(inputs=model.inputs, outputs=outputs)
# load the image with the required shape
# convert the image to an array
img = img_to_array(X_val_prep[43])
# expand dimensions so that it represents a single 'sample'
img = expand_dims(img, axis=0)
# prepare the image (e.g. scale pixel values for the vgg)
img = preprocess_input(img)
# get feature map for first hidden layer
feature_maps = model.predict(img)
# plot the output from each block
square = 8
for fmap in feature_maps:
# plot all 64 maps in an 8x8 squares
ix = 1
for _ in range(square):
plt.figure(figsize=(64,64))
for _ in range(square):
# specify subplot and turn of axis
ax = pyplot.subplot(square, square, ix)
ax.set_xticks([])
ax.set_yticks([])
# plot filter channel in grayscale
plt.imshow(fmap[0, :, :, ix-1], cmap='viridis')
ix += 1
# show the figure
plt.show()
#import the time module
import time
start = time.time()
# Fit the pre-trained module and save the training history
vgg16_history = vgg16.fit_generator(
train_generator,
steps_per_epoch=50,
epochs=120,
validation_data=validation_generator,
validation_steps=30,
)
# Calculate and print the time taken from the training process
end = time.time()
print(end - start)
Epoch 1/120 50/50 [==============================] - 23s 465ms/step - loss: 3.8053 - acc: 0.6630 - val_loss: 1.2723 - val_acc: 0.8377 Epoch 2/120 50/50 [==============================] - 19s 380ms/step - loss: 2.4119 - acc: 0.7622 - val_loss: 0.8067 - val_acc: 0.9022 Epoch 3/120 50/50 [==============================] - 20s 400ms/step - loss: 1.6685 - acc: 0.8314 - val_loss: 1.0082 - val_acc: 0.9005 Epoch 4/120 50/50 [==============================] - 20s 396ms/step - loss: 1.5545 - acc: 0.8387 - val_loss: 1.0361 - val_acc: 0.9185 Epoch 5/120 50/50 [==============================] - 20s 395ms/step - loss: 1.3324 - acc: 0.8550 - val_loss: 0.9498 - val_acc: 0.9215 Epoch 6/120 50/50 [==============================] - 20s 396ms/step - loss: 1.7177 - acc: 0.8304 - val_loss: 1.3479 - val_acc: 0.8804 Epoch 7/120 50/50 [==============================] - 20s 390ms/step - loss: 1.7780 - acc: 0.8460 - val_loss: 1.3686 - val_acc: 0.8560 Epoch 8/120 50/50 [==============================] - 20s 397ms/step - loss: 1.2049 - acc: 0.8901 - val_loss: 1.1806 - val_acc: 0.8641 Epoch 9/120 50/50 [==============================] - 20s 400ms/step - loss: 1.2286 - acc: 0.8838 - val_loss: 0.9910 - val_acc: 0.8979 Epoch 10/120 50/50 [==============================] - 20s 395ms/step - loss: 0.8854 - acc: 0.9147 - val_loss: 0.9084 - val_acc: 0.9022 Epoch 11/120 50/50 [==============================] - 20s 398ms/step - loss: 0.8716 - acc: 0.9072 - val_loss: 1.4376 - val_acc: 0.8377 Epoch 12/120 50/50 [==============================] - 20s 396ms/step - loss: 0.7945 - acc: 0.9154 - val_loss: 0.8093 - val_acc: 0.9022 Epoch 13/120 50/50 [==============================] - 20s 396ms/step - loss: 0.8091 - acc: 0.9014 - val_loss: 1.5205 - val_acc: 0.8403 Epoch 14/120 50/50 [==============================] - 19s 387ms/step - loss: 0.6924 - acc: 0.9126 - val_loss: 1.2646 - val_acc: 0.8804 Epoch 15/120 50/50 [==============================] - 20s 407ms/step - loss: 0.7640 - acc: 0.9279 - val_loss: 1.2711 - val_acc: 0.8403 Epoch 16/120 50/50 [==============================] - 20s 396ms/step - loss: 0.9084 - acc: 0.9133 - val_loss: 1.0379 - val_acc: 0.8967 Epoch 17/120 50/50 [==============================] - 20s 398ms/step - loss: 1.0699 - acc: 0.9083 - val_loss: 1.0581 - val_acc: 0.8822 Epoch 18/120 50/50 [==============================] - 20s 395ms/step - loss: 0.9147 - acc: 0.9083 - val_loss: 1.9530 - val_acc: 0.8152 Epoch 19/120 50/50 [==============================] - 20s 397ms/step - loss: 0.6259 - acc: 0.9379 - val_loss: 0.9707 - val_acc: 0.9188 Epoch 20/120 50/50 [==============================] - 20s 395ms/step - loss: 0.6738 - acc: 0.9273 - val_loss: 0.9392 - val_acc: 0.8832 Epoch 21/120 50/50 [==============================] - 19s 389ms/step - loss: 1.3774 - acc: 0.8728 - val_loss: 0.8869 - val_acc: 0.8822 Epoch 22/120 50/50 [==============================] - 20s 394ms/step - loss: 0.7836 - acc: 0.9248 - val_loss: 1.0201 - val_acc: 0.9158 Epoch 23/120 50/50 [==============================] - 20s 396ms/step - loss: 0.5858 - acc: 0.9423 - val_loss: 0.8326 - val_acc: 0.9188 Epoch 24/120 50/50 [==============================] - 20s 402ms/step - loss: 0.5689 - acc: 0.9404 - val_loss: 0.6883 - val_acc: 0.9212 Epoch 25/120 50/50 [==============================] - 20s 395ms/step - loss: 0.6002 - acc: 0.9246 - val_loss: 0.7534 - val_acc: 0.9005 Epoch 26/120 50/50 [==============================] - 20s 395ms/step - loss: 0.6520 - acc: 0.9304 - val_loss: 0.7606 - val_acc: 0.9022 Epoch 27/120 50/50 [==============================] - 20s 399ms/step - loss: 1.0530 - acc: 0.9018 - val_loss: 1.5530 - val_acc: 0.8822 Epoch 28/120 50/50 [==============================] - 19s 386ms/step - loss: 0.6501 - acc: 0.9354 - val_loss: 0.5536 - val_acc: 0.9185 Epoch 29/120 50/50 [==============================] - 20s 397ms/step - loss: 0.5326 - acc: 0.9486 - val_loss: 0.6481 - val_acc: 0.9215 Epoch 30/120 50/50 [==============================] - 20s 403ms/step - loss: 0.5065 - acc: 0.9448 - val_loss: 0.9057 - val_acc: 0.9185 Epoch 31/120 50/50 [==============================] - 20s 400ms/step - loss: 0.5650 - acc: 0.9423 - val_loss: 0.7568 - val_acc: 0.9188 Epoch 32/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3942 - acc: 0.9536 - val_loss: 0.8809 - val_acc: 0.8995 Epoch 33/120 50/50 [==============================] - 20s 400ms/step - loss: 0.7860 - acc: 0.9158 - val_loss: 0.6478 - val_acc: 0.9215 Epoch 34/120 50/50 [==============================] - 20s 395ms/step - loss: 0.5063 - acc: 0.9480 - val_loss: 0.7240 - val_acc: 0.9185 Epoch 35/120 50/50 [==============================] - 19s 387ms/step - loss: 0.4215 - acc: 0.9567 - val_loss: 0.8374 - val_acc: 0.9215 Epoch 36/120 50/50 [==============================] - 20s 396ms/step - loss: 0.8096 - acc: 0.9346 - val_loss: 0.8005 - val_acc: 0.9402 Epoch 37/120 50/50 [==============================] - 20s 396ms/step - loss: 0.4159 - acc: 0.9561 - val_loss: 0.9302 - val_acc: 0.9424 Epoch 38/120 50/50 [==============================] - 20s 395ms/step - loss: 0.5249 - acc: 0.9309 - val_loss: 0.9981 - val_acc: 0.9375 Epoch 39/120 50/50 [==============================] - 20s 399ms/step - loss: 0.7666 - acc: 0.9290 - val_loss: 0.9242 - val_acc: 0.9241 Epoch 40/120 50/50 [==============================] - 20s 401ms/step - loss: 0.4119 - acc: 0.9555 - val_loss: 1.0648 - val_acc: 0.9158 Epoch 41/120 50/50 [==============================] - 20s 397ms/step - loss: 0.4256 - acc: 0.9624 - val_loss: 1.0710 - val_acc: 0.9188 Epoch 42/120 50/50 [==============================] - 19s 386ms/step - loss: 0.3306 - acc: 0.9680 - val_loss: 0.9294 - val_acc: 0.9402 Epoch 43/120 50/50 [==============================] - 20s 397ms/step - loss: 0.3610 - acc: 0.9618 - val_loss: 0.8817 - val_acc: 0.9241 Epoch 44/120 50/50 [==============================] - 20s 396ms/step - loss: 0.4075 - acc: 0.9586 - val_loss: 0.9671 - val_acc: 0.9158 Epoch 45/120 50/50 [==============================] - 20s 398ms/step - loss: 0.4077 - acc: 0.9459 - val_loss: 1.1745 - val_acc: 0.8822 Epoch 46/120 50/50 [==============================] - 20s 406ms/step - loss: 0.3689 - acc: 0.9630 - val_loss: 1.0926 - val_acc: 0.8967 Epoch 47/120 50/50 [==============================] - 20s 395ms/step - loss: 0.7979 - acc: 0.9214 - val_loss: 1.1617 - val_acc: 0.8770 Epoch 48/120 50/50 [==============================] - 20s 400ms/step - loss: 0.7293 - acc: 0.9221 - val_loss: 1.0106 - val_acc: 0.9022 Epoch 49/120 50/50 [==============================] - 19s 389ms/step - loss: 0.4175 - acc: 0.9599 - val_loss: 1.0376 - val_acc: 0.9005 Epoch 50/120 50/50 [==============================] - 20s 393ms/step - loss: 0.4982 - acc: 0.9346 - val_loss: 1.0653 - val_acc: 0.8995 Epoch 51/120 50/50 [==============================] - 20s 397ms/step - loss: 0.4796 - acc: 0.9549 - val_loss: 1.1142 - val_acc: 0.9241 Epoch 52/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3038 - acc: 0.9687 - val_loss: 1.2214 - val_acc: 0.9158 Epoch 53/120 50/50 [==============================] - 20s 397ms/step - loss: 0.4105 - acc: 0.9580 - val_loss: 1.0275 - val_acc: 0.9241 Epoch 54/120 50/50 [==============================] - 20s 396ms/step - loss: 0.5019 - acc: 0.9377 - val_loss: 1.0555 - val_acc: 0.8967 Epoch 55/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3302 - acc: 0.9693 - val_loss: 1.1818 - val_acc: 0.9031 Epoch 56/120 50/50 [==============================] - 20s 392ms/step - loss: 0.3520 - acc: 0.9624 - val_loss: 1.1079 - val_acc: 0.9185 Epoch 57/120 50/50 [==============================] - 20s 397ms/step - loss: 0.3235 - acc: 0.9705 - val_loss: 0.9348 - val_acc: 0.9058 Epoch 58/120 50/50 [==============================] - 20s 395ms/step - loss: 0.3334 - acc: 0.9630 - val_loss: 1.3502 - val_acc: 0.9130 Epoch 59/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3708 - acc: 0.9611 - val_loss: 1.1000 - val_acc: 0.9188 Epoch 60/120 50/50 [==============================] - 20s 396ms/step - loss: 0.4370 - acc: 0.9605 - val_loss: 1.1498 - val_acc: 0.9212 Epoch 61/120 50/50 [==============================] - 20s 406ms/step - loss: 0.7381 - acc: 0.9282 - val_loss: 1.8898 - val_acc: 0.8377 Epoch 62/120 50/50 [==============================] - 20s 395ms/step - loss: 0.3586 - acc: 0.9611 - val_loss: 1.2027 - val_acc: 0.9212 Epoch 63/120 50/50 [==============================] - 20s 391ms/step - loss: 0.3488 - acc: 0.9655 - val_loss: 1.2558 - val_acc: 0.9188 Epoch 64/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3354 - acc: 0.9687 - val_loss: 1.2708 - val_acc: 0.9022 Epoch 65/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3143 - acc: 0.9687 - val_loss: 1.0227 - val_acc: 0.9188 Epoch 66/120 50/50 [==============================] - 20s 397ms/step - loss: 0.3110 - acc: 0.9668 - val_loss: 0.9154 - val_acc: 0.8995 Epoch 67/120 50/50 [==============================] - 20s 395ms/step - loss: 0.3912 - acc: 0.9528 - val_loss: 1.0121 - val_acc: 0.9188 Epoch 68/120 50/50 [==============================] - 20s 396ms/step - loss: 0.7075 - acc: 0.9428 - val_loss: 1.3371 - val_acc: 0.9022 Epoch 69/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3543 - acc: 0.9699 - val_loss: 0.9375 - val_acc: 0.9162 Epoch 70/120 50/50 [==============================] - 19s 389ms/step - loss: 0.3100 - acc: 0.9737 - val_loss: 0.8687 - val_acc: 0.9239 Epoch 71/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3082 - acc: 0.9687 - val_loss: 1.3978 - val_acc: 0.8979 Epoch 72/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3934 - acc: 0.9611 - val_loss: 1.2311 - val_acc: 0.9212 Epoch 73/120 50/50 [==============================] - 20s 398ms/step - loss: 0.5847 - acc: 0.9528 - val_loss: 1.2013 - val_acc: 0.9241 Epoch 74/120 50/50 [==============================] - 20s 392ms/step - loss: 0.3559 - acc: 0.9540 - val_loss: 1.7118 - val_acc: 0.8533 Epoch 75/120 50/50 [==============================] - 20s 397ms/step - loss: 0.3889 - acc: 0.9630 - val_loss: 1.1191 - val_acc: 0.9241 Epoch 76/120 50/50 [==============================] - 20s 408ms/step - loss: 0.3818 - acc: 0.9630 - val_loss: 1.3536 - val_acc: 0.9158 Epoch 77/120 50/50 [==============================] - 19s 390ms/step - loss: 0.5668 - acc: 0.9534 - val_loss: 1.1021 - val_acc: 0.9188 Epoch 78/120 50/50 [==============================] - 20s 396ms/step - loss: 0.2614 - acc: 0.9768 - val_loss: 1.4529 - val_acc: 0.9022 Epoch 79/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3520 - acc: 0.9712 - val_loss: 1.5437 - val_acc: 0.9005 Epoch 80/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3854 - acc: 0.9636 - val_loss: 1.6519 - val_acc: 0.8804 Epoch 81/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3033 - acc: 0.9724 - val_loss: 1.4156 - val_acc: 0.8979 Epoch 82/120 50/50 [==============================] - 20s 395ms/step - loss: 0.5489 - acc: 0.9584 - val_loss: 1.2817 - val_acc: 0.9022 Epoch 83/120 50/50 [==============================] - 20s 395ms/step - loss: 0.5895 - acc: 0.9559 - val_loss: 1.2842 - val_acc: 0.9005 Epoch 84/120 50/50 [==============================] - 19s 389ms/step - loss: 0.3135 - acc: 0.9705 - val_loss: 1.4858 - val_acc: 0.8995 Epoch 85/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2817 - acc: 0.9724 - val_loss: 1.3413 - val_acc: 0.9005 Epoch 86/120 50/50 [==============================] - 20s 397ms/step - loss: 0.2267 - acc: 0.9768 - val_loss: 1.2736 - val_acc: 0.9185 Epoch 87/120 50/50 [==============================] - 20s 402ms/step - loss: 0.2235 - acc: 0.9793 - val_loss: 1.3849 - val_acc: 0.9058 Epoch 88/120 50/50 [==============================] - 20s 395ms/step - loss: 0.5580 - acc: 0.9472 - val_loss: 1.6923 - val_acc: 0.8940 Epoch 89/120 50/50 [==============================] - 20s 398ms/step - loss: 0.4868 - acc: 0.9522 - val_loss: 1.6380 - val_acc: 0.8979 Epoch 90/120 50/50 [==============================] - 20s 396ms/step - loss: 0.2514 - acc: 0.9743 - val_loss: 1.6065 - val_acc: 0.8804 Epoch 91/120 50/50 [==============================] - 19s 388ms/step - loss: 0.4432 - acc: 0.9484 - val_loss: 2.0195 - val_acc: 0.8429 Epoch 92/120 50/50 [==============================] - 20s 408ms/step - loss: 0.3442 - acc: 0.9674 - val_loss: 1.4496 - val_acc: 0.8995 Epoch 93/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2814 - acc: 0.9705 - val_loss: 1.5425 - val_acc: 0.8953 Epoch 94/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3542 - acc: 0.9655 - val_loss: 0.7917 - val_acc: 0.9239 Epoch 95/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3594 - acc: 0.9668 - val_loss: 1.1639 - val_acc: 0.9005 Epoch 96/120 50/50 [==============================] - 20s 396ms/step - loss: 0.1911 - acc: 0.9787 - val_loss: 1.3738 - val_acc: 0.8995 Epoch 97/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3146 - acc: 0.9730 - val_loss: 1.2629 - val_acc: 0.9005 Epoch 98/120 50/50 [==============================] - 20s 391ms/step - loss: 0.2567 - acc: 0.9787 - val_loss: 1.4887 - val_acc: 0.8995 Epoch 99/120 50/50 [==============================] - 20s 396ms/step - loss: 0.2762 - acc: 0.9756 - val_loss: 1.5592 - val_acc: 0.8953 Epoch 100/120 50/50 [==============================] - 20s 396ms/step - loss: 0.2983 - acc: 0.9705 - val_loss: 1.2800 - val_acc: 0.9049 Epoch 101/120 50/50 [==============================] - 20s 401ms/step - loss: 0.1720 - acc: 0.9824 - val_loss: 1.1298 - val_acc: 0.8979 Epoch 102/120 50/50 [==============================] - 20s 397ms/step - loss: 0.2629 - acc: 0.9743 - val_loss: 1.0903 - val_acc: 0.9022 Epoch 103/120 50/50 [==============================] - 20s 399ms/step - loss: 0.1771 - acc: 0.9787 - val_loss: 1.0924 - val_acc: 0.8979 Epoch 104/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3071 - acc: 0.9724 - val_loss: 1.2204 - val_acc: 0.9022 Epoch 105/120 50/50 [==============================] - 19s 389ms/step - loss: 0.2571 - acc: 0.9749 - val_loss: 1.5547 - val_acc: 0.9031 Epoch 106/120 50/50 [==============================] - 20s 397ms/step - loss: 0.2865 - acc: 0.9743 - val_loss: 0.8661 - val_acc: 0.9158 Epoch 107/120 50/50 [==============================] - 20s 409ms/step - loss: 0.2103 - acc: 0.9799 - val_loss: 1.5202 - val_acc: 0.9031 Epoch 108/120 50/50 [==============================] - 20s 395ms/step - loss: 0.2559 - acc: 0.9737 - val_loss: 0.9637 - val_acc: 0.9375 Epoch 109/120 50/50 [==============================] - 20s 402ms/step - loss: 0.2079 - acc: 0.9781 - val_loss: 1.1413 - val_acc: 0.9215 Epoch 110/120 50/50 [==============================] - 20s 395ms/step - loss: 0.2491 - acc: 0.9781 - val_loss: 0.9576 - val_acc: 0.9375 Epoch 111/120 50/50 [==============================] - 20s 396ms/step - loss: 0.1944 - acc: 0.9799 - val_loss: 1.1748 - val_acc: 0.9215 Epoch 112/120 50/50 [==============================] - 19s 387ms/step - loss: 0.2295 - acc: 0.9793 - val_loss: 1.1036 - val_acc: 0.9185 Epoch 113/120 50/50 [==============================] - 20s 397ms/step - loss: 0.2246 - acc: 0.9768 - val_loss: 1.0404 - val_acc: 0.9215 Epoch 114/120 50/50 [==============================] - 20s 396ms/step - loss: 0.2056 - acc: 0.9787 - val_loss: 0.8814 - val_acc: 0.9185 Epoch 115/120 50/50 [==============================] - 20s 401ms/step - loss: 0.8395 - acc: 0.9351 - val_loss: 0.7018 - val_acc: 0.9424 Epoch 116/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3106 - acc: 0.9724 - val_loss: 1.2547 - val_acc: 0.9185 Epoch 117/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2502 - acc: 0.9749 - val_loss: 1.3421 - val_acc: 0.9005 Epoch 118/120 50/50 [==============================] - 20s 401ms/step - loss: 0.1547 - acc: 0.9793 - val_loss: 1.3073 - val_acc: 0.9185 Epoch 119/120 50/50 [==============================] - 19s 389ms/step - loss: 0.2412 - acc: 0.9768 - val_loss: 1.2172 - val_acc: 0.9241 Epoch 120/120 50/50 [==============================] - 20s 395ms/step - loss: 0.3027 - acc: 0.9724 - val_loss: 1.3502 - val_acc: 0.9158 2381.356447458267
# perform predictions on validation set using the trained vgg16 model
predictions = vgg16.predict(X_test_prep)
predictions = [1 if x>0.5 else 0 for x in predictions]
_, train_acc = vgg16.evaluate(X_val_prep, y_val, verbose=0)
_, test_acc = vgg16.evaluate(X_test_prep, y_test, verbose=0)
pyplot.figure(figsize=(12,12))
# plot loss of vgg-16 model during training
pyplot.subplot(211)
pyplot.title('Vgg16 Loss')
pyplot.plot(vgg16_history.history['loss'], label='train')
pyplot.plot(vgg16_history.history['val_loss'], label='Validation')
pyplot.legend()
# plot accuracy of vgg-16 model during training
pyplot.subplot(212)
pyplot.title('Vgg16 Accuracy')
pyplot.plot(vgg16_history.history['acc'], label='train')
pyplot.plot(vgg16_history.history['val_acc'], label='Validation')
pyplot.legend()
pyplot.show()
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
Train: 0.900, Test: 0.800
#import necessary libraries and calculates the evaluation metrics
from sklearn.datasets import make_circles
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import roc_auc_score
from sklearn.metrics import confusion_matrix
from keras.models import Sequential
from keras.layers import Dense
# accuracy: (tp + tn) / (p + n)
accuracy = accuracy_score(y_test, predictions)
print('Accuracy: %f' % accuracy)
# precision tp / (tp + fp)
precision = precision_score(y_test, predictions)
print('Precision: %f' % precision)
# recall: tp / (tp + fn)
recall = recall_score(y_test, predictions)
print('Recall: %f' % recall)
# f1: 2 tp / (2 tp + fp + fn)
f1 = f1_score(y_test, predictions)
print('F1 score: %f' % f1)
Accuracy: 0.800000 Precision: 0.714286 Recall: 1.000000 F1 score: 0.833333
#measure the agreement between predicted and true labels in classification
kappa = cohen_kappa_score(y_test, predictions)
print('Cohens kappa: %f' % kappa)
# ROC AUC
auc = roc_auc_score(y_test, predictions)
print('ROC AUC: %f' % auc)
# confusion matrix
matrix = confusion_matrix(y_test, predictions)
print(matrix)
Cohens kappa: 0.600000 ROC AUC: 0.800000 [[3 2] [0 5]]
#define an InceptionV3 as a base model
#initialize the number of classes to 1
NUM_CLASSES = 1
#add InceptionV3 models as the first layer of the sequential model
inception_v3 = Sequential()
inception_v3.add(inceptionV3)
inception_v3.add(layers.Dropout(0.3))
inception_v3.add(layers.Flatten())
inception_v3.add(layers.Dropout(0.5))
inception_v3.add(layers.Dense(NUM_CLASSES, activation='sigmoid'))
inception_v3.layers[0].trainable = False
#compile the module using loss,optimizer and metrics
inception_v3.compile(
loss='binary_crossentropy',
optimizer=RMSprop(lr=1e-4),
metrics=['accuracy']
)
#print the model summary
inception_v3.summary()
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= inception_v3 (Model) (None, 5, 5, 2048) 21802784 _________________________________________________________________ dropout_3 (Dropout) (None, 5, 5, 2048) 0 _________________________________________________________________ flatten_2 (Flatten) (None, 51200) 0 _________________________________________________________________ dropout_4 (Dropout) (None, 51200) 0 _________________________________________________________________ dense_2 (Dense) (None, 1) 51201 ================================================================= Total params: 21,853,985 Trainable params: 51,201 Non-trainable params: 21,802,784 _________________________________________________________________
#import the time module
import time
start = time.time()
#train the model using generator with the necessary input parameters
inception_v3_history = inception_v3.fit_generator(
train_generator,
steps_per_epoch=50,
epochs=120,
validation_data=validation_generator,
validation_steps=30,
)
#record the end time for training
end = time.time()
print(end - start)
Epoch 1/120 50/50 [==============================] - 29s 582ms/step - loss: 0.8512 - acc: 0.5510 - val_loss: 3.7428 - val_acc: 0.6780 Epoch 2/120 50/50 [==============================] - 18s 363ms/step - loss: 0.6595 - acc: 0.6659 - val_loss: 5.0402 - val_acc: 0.6495 Epoch 3/120 50/50 [==============================] - 20s 401ms/step - loss: 0.5614 - acc: 0.7428 - val_loss: 4.9734 - val_acc: 0.6309 Epoch 4/120 50/50 [==============================] - 20s 391ms/step - loss: 0.6028 - acc: 0.6778 - val_loss: 3.3086 - val_acc: 0.7446 Epoch 5/120 50/50 [==============================] - 20s 401ms/step - loss: 0.5405 - acc: 0.7165 - val_loss: 4.7239 - val_acc: 0.6387 Epoch 6/120 50/50 [==============================] - 20s 399ms/step - loss: 0.5219 - acc: 0.7499 - val_loss: 6.1517 - val_acc: 0.6141 Epoch 7/120 50/50 [==============================] - 20s 396ms/step - loss: 0.4492 - acc: 0.7967 - val_loss: 5.9680 - val_acc: 0.6257 Epoch 8/120 50/50 [==============================] - 20s 402ms/step - loss: 0.5553 - acc: 0.7155 - val_loss: 5.8356 - val_acc: 0.6196 Epoch 9/120 50/50 [==============================] - 20s 401ms/step - loss: 0.4714 - acc: 0.7460 - val_loss: 4.4505 - val_acc: 0.6597 Epoch 10/120 50/50 [==============================] - 20s 404ms/step - loss: 0.4334 - acc: 0.7817 - val_loss: 5.8855 - val_acc: 0.6223 Epoch 11/120 50/50 [==============================] - 20s 394ms/step - loss: 0.4654 - acc: 0.7758 - val_loss: 5.8406 - val_acc: 0.6204 Epoch 12/120 50/50 [==============================] - 20s 401ms/step - loss: 0.4025 - acc: 0.8235 - val_loss: 4.4230 - val_acc: 0.6766 Epoch 13/120 50/50 [==============================] - 20s 405ms/step - loss: 0.3958 - acc: 0.8126 - val_loss: 5.4333 - val_acc: 0.6204 Epoch 14/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3424 - acc: 0.8650 - val_loss: 5.0815 - val_acc: 0.6549 Epoch 15/120 50/50 [==============================] - 20s 399ms/step - loss: 0.4009 - acc: 0.8195 - val_loss: 4.6632 - val_acc: 0.6649 Epoch 16/120 50/50 [==============================] - 21s 411ms/step - loss: 0.4719 - acc: 0.7684 - val_loss: 5.5408 - val_acc: 0.6250 Epoch 17/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3648 - acc: 0.8481 - val_loss: 3.7347 - val_acc: 0.7225 Epoch 18/120 50/50 [==============================] - 20s 392ms/step - loss: 0.3928 - acc: 0.8245 - val_loss: 3.7402 - val_acc: 0.7283 Epoch 19/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3642 - acc: 0.8329 - val_loss: 6.0932 - val_acc: 0.6178 Epoch 20/120 50/50 [==============================] - 20s 402ms/step - loss: 0.4140 - acc: 0.7986 - val_loss: 4.9106 - val_acc: 0.6685 Epoch 21/120 50/50 [==============================] - 20s 404ms/step - loss: 0.3825 - acc: 0.8011 - val_loss: 3.8920 - val_acc: 0.6754 Epoch 22/120 50/50 [==============================] - 20s 398ms/step - loss: 0.4149 - acc: 0.7663 - val_loss: 4.7487 - val_acc: 0.6630 Epoch 23/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2995 - acc: 0.8707 - val_loss: 4.2359 - val_acc: 0.6990 Epoch 24/120 50/50 [==============================] - 20s 403ms/step - loss: 0.4085 - acc: 0.7968 - val_loss: 5.0017 - val_acc: 0.6576 Epoch 25/120 50/50 [==============================] - 20s 394ms/step - loss: 0.3912 - acc: 0.7731 - val_loss: 5.8305 - val_acc: 0.6178 Epoch 26/120 50/50 [==============================] - 20s 402ms/step - loss: 0.3539 - acc: 0.8149 - val_loss: 3.8895 - val_acc: 0.7582 Epoch 27/120 50/50 [==============================] - 20s 405ms/step - loss: 0.3795 - acc: 0.8592 - val_loss: 4.6304 - val_acc: 0.6649 Epoch 28/120 50/50 [==============================] - 20s 404ms/step - loss: 0.3927 - acc: 0.8187 - val_loss: 5.3832 - val_acc: 0.6141 Epoch 29/120 50/50 [==============================] - 20s 402ms/step - loss: 0.3290 - acc: 0.8440 - val_loss: 5.8738 - val_acc: 0.6257 Epoch 30/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3515 - acc: 0.8408 - val_loss: 5.8522 - val_acc: 0.6141 Epoch 31/120 50/50 [==============================] - 21s 411ms/step - loss: 0.3401 - acc: 0.8483 - val_loss: 5.3322 - val_acc: 0.6257 Epoch 32/120 50/50 [==============================] - 20s 392ms/step - loss: 0.2957 - acc: 0.8907 - val_loss: 5.5207 - val_acc: 0.6304 Epoch 33/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2698 - acc: 0.8870 - val_loss: 5.3119 - val_acc: 0.6257 Epoch 34/120 50/50 [==============================] - 20s 400ms/step - loss: 0.2688 - acc: 0.8813 - val_loss: 4.9041 - val_acc: 0.6603 Epoch 35/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3274 - acc: 0.8440 - val_loss: 5.2702 - val_acc: 0.6414 Epoch 36/120 50/50 [==============================] - 20s 397ms/step - loss: 0.3375 - acc: 0.8352 - val_loss: 5.4106 - val_acc: 0.6413 Epoch 37/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3221 - acc: 0.8780 - val_loss: 5.4962 - val_acc: 0.6387 Epoch 38/120 50/50 [==============================] - 20s 402ms/step - loss: 0.2644 - acc: 0.8914 - val_loss: 4.1880 - val_acc: 0.6957 Epoch 39/120 50/50 [==============================] - 20s 393ms/step - loss: 0.4033 - acc: 0.8174 - val_loss: 4.3248 - val_acc: 0.7042 Epoch 40/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3713 - acc: 0.8262 - val_loss: 5.2131 - val_acc: 0.6413 Epoch 41/120 50/50 [==============================] - 20s 400ms/step - loss: 0.3611 - acc: 0.8281 - val_loss: 4.9199 - val_acc: 0.6597 Epoch 42/120 50/50 [==============================] - 20s 397ms/step - loss: 0.2676 - acc: 0.8699 - val_loss: 5.0715 - val_acc: 0.6413 Epoch 43/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3464 - acc: 0.8471 - val_loss: 4.5348 - val_acc: 0.6623 Epoch 44/120 50/50 [==============================] - 20s 408ms/step - loss: 0.3279 - acc: 0.8527 - val_loss: 5.5928 - val_acc: 0.6386 Epoch 45/120 50/50 [==============================] - 20s 402ms/step - loss: 0.2486 - acc: 0.8730 - val_loss: 4.2101 - val_acc: 0.6571 Epoch 46/120 50/50 [==============================] - 19s 389ms/step - loss: 0.2005 - acc: 0.9348 - val_loss: 4.5965 - val_acc: 0.6440 Epoch 47/120 50/50 [==============================] - 21s 413ms/step - loss: 0.2715 - acc: 0.9051 - val_loss: 5.7812 - val_acc: 0.6361 Epoch 48/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2276 - acc: 0.9154 - val_loss: 4.0498 - val_acc: 0.7255 Epoch 49/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3431 - acc: 0.8805 - val_loss: 4.0511 - val_acc: 0.6780 Epoch 50/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2850 - acc: 0.8811 - val_loss: 4.8323 - val_acc: 0.6576 Epoch 51/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3821 - acc: 0.8634 - val_loss: 4.1731 - val_acc: 0.6623 Epoch 52/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3450 - acc: 0.8596 - val_loss: 5.7541 - val_acc: 0.6359 Epoch 53/120 50/50 [==============================] - 20s 391ms/step - loss: 0.2889 - acc: 0.8754 - val_loss: 5.4090 - val_acc: 0.6440 Epoch 54/120 50/50 [==============================] - 20s 394ms/step - loss: 0.3629 - acc: 0.8279 - val_loss: 5.4152 - val_acc: 0.6603 Epoch 55/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2962 - acc: 0.8786 - val_loss: 5.6758 - val_acc: 0.6440 Epoch 56/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2988 - acc: 0.8438 - val_loss: 5.7651 - val_acc: 0.6386 Epoch 57/120 50/50 [==============================] - 20s 397ms/step - loss: 0.3811 - acc: 0.8475 - val_loss: 5.6442 - val_acc: 0.6414 Epoch 58/120 50/50 [==============================] - 20s 396ms/step - loss: 0.2413 - acc: 0.8824 - val_loss: 5.5543 - val_acc: 0.6413 Epoch 59/120 50/50 [==============================] - 20s 402ms/step - loss: 0.3148 - acc: 0.8774 - val_loss: 5.4927 - val_acc: 0.6387 Epoch 60/120 50/50 [==============================] - 20s 390ms/step - loss: 0.2945 - acc: 0.8272 - val_loss: 5.4411 - val_acc: 0.6413 Epoch 61/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2584 - acc: 0.9020 - val_loss: 5.4672 - val_acc: 0.6571 Epoch 62/120 50/50 [==============================] - 20s 407ms/step - loss: 0.3365 - acc: 0.8507 - val_loss: 5.3793 - val_acc: 0.6522 Epoch 63/120 50/50 [==============================] - 20s 400ms/step - loss: 0.2388 - acc: 0.9146 - val_loss: 5.6363 - val_acc: 0.6309 Epoch 64/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3530 - acc: 0.8563 - val_loss: 5.4939 - val_acc: 0.6413 Epoch 65/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2700 - acc: 0.8912 - val_loss: 5.0003 - val_acc: 0.6571 Epoch 66/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3520 - acc: 0.8684 - val_loss: 5.4318 - val_acc: 0.6440 Epoch 67/120 50/50 [==============================] - 20s 393ms/step - loss: 0.3887 - acc: 0.8659 - val_loss: 5.5956 - val_acc: 0.6361 Epoch 68/120 50/50 [==============================] - 20s 400ms/step - loss: 0.2959 - acc: 0.8519 - val_loss: 5.3772 - val_acc: 0.6630 Epoch 69/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2770 - acc: 0.8634 - val_loss: 5.5141 - val_acc: 0.6387 Epoch 70/120 50/50 [==============================] - 20s 395ms/step - loss: 0.2684 - acc: 0.8628 - val_loss: 5.3719 - val_acc: 0.6630 Epoch 71/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2281 - acc: 0.8874 - val_loss: 5.4672 - val_acc: 0.6571 Epoch 72/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3197 - acc: 0.8678 - val_loss: 5.4950 - val_acc: 0.6386 Epoch 73/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2369 - acc: 0.9077 - val_loss: 5.4337 - val_acc: 0.6597 Epoch 74/120 50/50 [==============================] - 19s 389ms/step - loss: 0.3284 - acc: 0.8728 - val_loss: 5.5265 - val_acc: 0.6413 Epoch 75/120 50/50 [==============================] - 20s 403ms/step - loss: 0.3254 - acc: 0.8830 - val_loss: 5.4259 - val_acc: 0.6597 Epoch 76/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2739 - acc: 0.8780 - val_loss: 5.6991 - val_acc: 0.6114 Epoch 77/120 50/50 [==============================] - 21s 413ms/step - loss: 0.3055 - acc: 0.8678 - val_loss: 5.3913 - val_acc: 0.6283 Epoch 78/120 50/50 [==============================] - 20s 395ms/step - loss: 0.2121 - acc: 0.9183 - val_loss: 5.5214 - val_acc: 0.6359 Epoch 79/120 50/50 [==============================] - 20s 400ms/step - loss: 0.2640 - acc: 0.8734 - val_loss: 5.6795 - val_acc: 0.6440 Epoch 80/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3845 - acc: 0.8507 - val_loss: 5.5612 - val_acc: 0.6386 Epoch 81/120 50/50 [==============================] - 20s 393ms/step - loss: 0.2385 - acc: 0.8892 - val_loss: 5.4318 - val_acc: 0.6597 Epoch 82/120 50/50 [==============================] - 20s 396ms/step - loss: 0.2532 - acc: 0.8734 - val_loss: 5.5589 - val_acc: 0.6196 Epoch 83/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2760 - acc: 0.8684 - val_loss: 6.0097 - val_acc: 0.6230 Epoch 84/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3367 - acc: 0.8678 - val_loss: 5.3487 - val_acc: 0.6658 Epoch 85/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2644 - acc: 0.9006 - val_loss: 5.6941 - val_acc: 0.6335 Epoch 86/120 50/50 [==============================] - 20s 396ms/step - loss: 0.3747 - acc: 0.8279 - val_loss: 5.4162 - val_acc: 0.6603 Epoch 87/120 50/50 [==============================] - 20s 405ms/step - loss: 0.2900 - acc: 0.8444 - val_loss: 5.6531 - val_acc: 0.6178 Epoch 88/120 50/50 [==============================] - 19s 388ms/step - loss: 0.2958 - acc: 0.8753 - val_loss: 5.4398 - val_acc: 0.6440 Epoch 89/120 50/50 [==============================] - 20s 403ms/step - loss: 0.1967 - acc: 0.9304 - val_loss: 6.0514 - val_acc: 0.6204 Epoch 90/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2873 - acc: 0.8805 - val_loss: 5.3591 - val_acc: 0.6467 Epoch 91/120 50/50 [==============================] - 20s 403ms/step - loss: 0.3020 - acc: 0.8482 - val_loss: 5.6904 - val_acc: 0.6335 Epoch 92/120 50/50 [==============================] - 20s 409ms/step - loss: 0.2176 - acc: 0.9146 - val_loss: 5.6295 - val_acc: 0.6386 Epoch 93/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3040 - acc: 0.8849 - val_loss: 5.7176 - val_acc: 0.6414 Epoch 94/120 50/50 [==============================] - 20s 400ms/step - loss: 0.2069 - acc: 0.9139 - val_loss: 5.2081 - val_acc: 0.6467 Epoch 95/120 50/50 [==============================] - 20s 390ms/step - loss: 0.3920 - acc: 0.8513 - val_loss: 5.5484 - val_acc: 0.6335 Epoch 96/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2977 - acc: 0.8968 - val_loss: 5.5455 - val_acc: 0.6386 Epoch 97/120 50/50 [==============================] - 20s 403ms/step - loss: 0.2251 - acc: 0.8899 - val_loss: 5.5518 - val_acc: 0.6387 Epoch 98/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2689 - acc: 0.8962 - val_loss: 5.6648 - val_acc: 0.6386 Epoch 99/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2207 - acc: 0.9177 - val_loss: 5.4655 - val_acc: 0.6440 Epoch 100/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2370 - acc: 0.9127 - val_loss: 5.5423 - val_acc: 0.6386 Epoch 101/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2400 - acc: 0.8778 - val_loss: 5.6588 - val_acc: 0.6440 Epoch 102/120 50/50 [==============================] - 19s 389ms/step - loss: 0.2348 - acc: 0.8899 - val_loss: 5.4938 - val_acc: 0.6359 Epoch 103/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3700 - acc: 0.8772 - val_loss: 5.6623 - val_acc: 0.6387 Epoch 104/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3247 - acc: 0.8797 - val_loss: 5.6542 - val_acc: 0.6440 Epoch 105/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2182 - acc: 0.8937 - val_loss: 5.8047 - val_acc: 0.6361 Epoch 106/120 50/50 [==============================] - 20s 401ms/step - loss: 0.1541 - acc: 0.9411 - val_loss: 5.6785 - val_acc: 0.6440 Epoch 107/120 50/50 [==============================] - 20s 408ms/step - loss: 0.3340 - acc: 0.8690 - val_loss: 5.5830 - val_acc: 0.6387 Epoch 108/120 50/50 [==============================] - 20s 400ms/step - loss: 0.1903 - acc: 0.9361 - val_loss: 5.7218 - val_acc: 0.6413 Epoch 109/120 50/50 [==============================] - 19s 389ms/step - loss: 0.2870 - acc: 0.9049 - val_loss: 5.4674 - val_acc: 0.6571 Epoch 110/120 50/50 [==============================] - 20s 397ms/step - loss: 0.2600 - acc: 0.8803 - val_loss: 5.5918 - val_acc: 0.6495 Epoch 111/120 50/50 [==============================] - 20s 400ms/step - loss: 0.3030 - acc: 0.8722 - val_loss: 5.6431 - val_acc: 0.6309 Epoch 112/120 50/50 [==============================] - 20s 396ms/step - loss: 0.2207 - acc: 0.8981 - val_loss: 5.6785 - val_acc: 0.6440 Epoch 113/120 50/50 [==============================] - 20s 397ms/step - loss: 0.2796 - acc: 0.8866 - val_loss: 5.7630 - val_acc: 0.6387 Epoch 114/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3253 - acc: 0.8734 - val_loss: 5.6966 - val_acc: 0.6359 Epoch 115/120 50/50 [==============================] - 20s 402ms/step - loss: 0.2100 - acc: 0.8897 - val_loss: 5.6850 - val_acc: 0.6414 Epoch 116/120 50/50 [==============================] - 20s 392ms/step - loss: 0.2969 - acc: 0.9264 - val_loss: 5.5569 - val_acc: 0.6440 Epoch 117/120 50/50 [==============================] - 20s 400ms/step - loss: 0.3787 - acc: 0.8613 - val_loss: 5.5611 - val_acc: 0.6414 Epoch 118/120 50/50 [==============================] - 20s 398ms/step - loss: 0.1868 - acc: 0.9386 - val_loss: 5.8084 - val_acc: 0.6359 Epoch 119/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2802 - acc: 0.8392 - val_loss: 5.7212 - val_acc: 0.6414 Epoch 120/120 50/50 [==============================] - 20s 397ms/step - loss: 0.2225 - acc: 0.8993 - val_loss: 5.7651 - val_acc: 0.6386 2402.2443401813507
#define the number of classes
NUM_CLASSES = 1
#Create a sequential model object
resnet50 = Sequential()
#Add resnet50 model architecture to the sequential module
resnet50.add(resnet50_x)
#Add a dropout later to reduce overfitting
resnet50.add(layers.Dropout(0.3))
#Flatten the output of resnet50 model
resnet50.add(layers.Flatten())
#Add another dropout later to further reduce overfitting
resnet50.add(layers.Dropout(0.5))
#Add a dense later with sigmoid activation
resnet50.add(layers.Dense(NUM_CLASSES, activation='sigmoid'))
#Set the weights of the ResNet50 model to be non-trainable
resnet50.layers[0].trainable = False
#Compile the model using binary cross-entropy loss, RMSprop optimizer, and accuracy metric
resnet50.compile(
loss='binary_crossentropy',
optimizer=RMSprop(lr=1e-4),
metrics=['accuracy']
)
#Compile the model using binary cross-entropy loss, Adam optimizer, and accuracy metric
resnet50.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(lr=0.0003, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False), metrics=["accuracy"])
#Print a summary of the model architecture and parameters
resnet50.summary()
_________________________________________________________________ Layer (type) Output Shape Param # ================================================================= resnet50 (Model) (None, 7, 7, 2048) 23587712 _________________________________________________________________ dropout_5 (Dropout) (None, 7, 7, 2048) 0 _________________________________________________________________ flatten_3 (Flatten) (None, 100352) 0 _________________________________________________________________ dropout_6 (Dropout) (None, 100352) 0 _________________________________________________________________ dense_3 (Dense) (None, 1) 100353 ================================================================= Total params: 23,688,065 Trainable params: 100,353 Non-trainable params: 23,587,712 _________________________________________________________________
#import time module
import time
#start the timer
start = time.time()
#Fit the resnet 50 model using generator for the training data
resnet50_history = resnet50.fit_generator(
train_generator,
steps_per_epoch=50,
epochs=120,
validation_data=validation_generator,
validation_steps=30,
)
#end timer
end = time.time()
#print time taken to train model
print(end - start)
Epoch 1/120 50/50 [==============================] - 27s 540ms/step - loss: 0.7552 - acc: 0.7296 - val_loss: 3.4884 - val_acc: 0.4783 Epoch 2/120 50/50 [==============================] - 19s 381ms/step - loss: 0.4173 - acc: 0.8427 - val_loss: 1.5260 - val_acc: 0.7068 Epoch 3/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3058 - acc: 0.8868 - val_loss: 2.0526 - val_acc: 0.6549 Epoch 4/120 50/50 [==============================] - 20s 403ms/step - loss: 0.2072 - acc: 0.9214 - val_loss: 2.6460 - val_acc: 0.5759 Epoch 5/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3174 - acc: 0.9189 - val_loss: 4.8668 - val_acc: 0.4402 Epoch 6/120 50/50 [==============================] - 20s 400ms/step - loss: 0.5920 - acc: 0.8728 - val_loss: 2.7379 - val_acc: 0.6178 Epoch 7/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3312 - acc: 0.9233 - val_loss: 1.3630 - val_acc: 0.7228 Epoch 8/120 50/50 [==============================] - 20s 393ms/step - loss: 0.2693 - acc: 0.9099 - val_loss: 1.5208 - val_acc: 0.7382 Epoch 9/120 50/50 [==============================] - 20s 399ms/step - loss: 0.4593 - acc: 0.8715 - val_loss: 0.6576 - val_acc: 0.8804 Epoch 10/120 50/50 [==============================] - 20s 402ms/step - loss: 0.3462 - acc: 0.8910 - val_loss: 1.3985 - val_acc: 0.7199 Epoch 11/120 50/50 [==============================] - 20s 400ms/step - loss: 0.2023 - acc: 0.9524 - val_loss: 2.9074 - val_acc: 0.6495 Epoch 12/120 50/50 [==============================] - 20s 407ms/step - loss: 0.4504 - acc: 0.9206 - val_loss: 1.8812 - val_acc: 0.6859 Epoch 13/120 50/50 [==============================] - 20s 400ms/step - loss: 0.3441 - acc: 0.9150 - val_loss: 3.7129 - val_acc: 0.5951 Epoch 14/120 50/50 [==============================] - 20s 401ms/step - loss: 0.4501 - acc: 0.9037 - val_loss: 2.3975 - val_acc: 0.6675 Epoch 15/120 50/50 [==============================] - 20s 391ms/step - loss: 0.3064 - acc: 0.9212 - val_loss: 1.2267 - val_acc: 0.8370 Epoch 16/120 50/50 [==============================] - 20s 405ms/step - loss: 0.3762 - acc: 0.9194 - val_loss: 1.0708 - val_acc: 0.8194 Epoch 17/120 50/50 [==============================] - 21s 412ms/step - loss: 0.2232 - acc: 0.9340 - val_loss: 1.7459 - val_acc: 0.7826 Epoch 18/120 50/50 [==============================] - 20s 400ms/step - loss: 0.5058 - acc: 0.8745 - val_loss: 1.1144 - val_acc: 0.8429 Epoch 19/120 50/50 [==============================] - 20s 400ms/step - loss: 0.1653 - acc: 0.9636 - val_loss: 1.3451 - val_acc: 0.8016 Epoch 20/120 50/50 [==============================] - 20s 403ms/step - loss: 0.3069 - acc: 0.9484 - val_loss: 1.4899 - val_acc: 0.8037 Epoch 21/120 50/50 [==============================] - 20s 400ms/step - loss: 0.1563 - acc: 0.9680 - val_loss: 2.1059 - val_acc: 0.7745 Epoch 22/120 50/50 [==============================] - 20s 390ms/step - loss: 0.1538 - acc: 0.9693 - val_loss: 1.9515 - val_acc: 0.7984 Epoch 23/120 50/50 [==============================] - 20s 400ms/step - loss: 0.1665 - acc: 0.9528 - val_loss: 2.4425 - val_acc: 0.6793 Epoch 24/120 50/50 [==============================] - 20s 401ms/step - loss: 0.1295 - acc: 0.9649 - val_loss: 1.9518 - val_acc: 0.7592 Epoch 25/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3407 - acc: 0.9231 - val_loss: 2.9447 - val_acc: 0.6957 Epoch 26/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2749 - acc: 0.9384 - val_loss: 2.0990 - val_acc: 0.7225 Epoch 27/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2638 - acc: 0.9434 - val_loss: 1.6793 - val_acc: 0.7799 Epoch 28/120 50/50 [==============================] - 20s 403ms/step - loss: 0.3112 - acc: 0.9434 - val_loss: 1.0412 - val_acc: 0.8796 Epoch 29/120 50/50 [==============================] - 20s 395ms/step - loss: 0.7560 - acc: 0.9123 - val_loss: 1.2181 - val_acc: 0.8614 Epoch 30/120 50/50 [==============================] - 20s 400ms/step - loss: 0.1806 - acc: 0.9465 - val_loss: 1.7942 - val_acc: 0.7565 Epoch 31/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3080 - acc: 0.9572 - val_loss: 0.8905 - val_acc: 0.8859 Epoch 32/120 50/50 [==============================] - 21s 418ms/step - loss: 0.4820 - acc: 0.9004 - val_loss: 1.0836 - val_acc: 0.8848 Epoch 33/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2943 - acc: 0.9415 - val_loss: 2.1548 - val_acc: 0.7391 Epoch 34/120 50/50 [==============================] - 20s 401ms/step - loss: 0.5427 - acc: 0.9244 - val_loss: 2.6586 - val_acc: 0.7042 Epoch 35/120 50/50 [==============================] - 20s 399ms/step - loss: 0.1468 - acc: 0.9699 - val_loss: 1.8550 - val_acc: 0.7908 Epoch 36/120 50/50 [==============================] - 20s 393ms/step - loss: 0.1424 - acc: 0.9755 - val_loss: 1.5140 - val_acc: 0.8246 Epoch 37/120 50/50 [==============================] - 20s 400ms/step - loss: 0.4138 - acc: 0.9288 - val_loss: 2.4082 - val_acc: 0.7391 Epoch 38/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3164 - acc: 0.9440 - val_loss: 1.1473 - val_acc: 0.8796 Epoch 39/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2076 - acc: 0.9472 - val_loss: 1.4537 - val_acc: 0.8424 Epoch 40/120 50/50 [==============================] - 20s 402ms/step - loss: 0.4577 - acc: 0.9294 - val_loss: 1.8448 - val_acc: 0.8246 Epoch 41/120 50/50 [==============================] - 20s 399ms/step - loss: 0.6614 - acc: 0.9238 - val_loss: 1.8285 - val_acc: 0.8152 Epoch 42/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2263 - acc: 0.9332 - val_loss: 1.6573 - val_acc: 0.8220 Epoch 43/120 50/50 [==============================] - 20s 392ms/step - loss: 0.1752 - acc: 0.9565 - val_loss: 3.4285 - val_acc: 0.6005 Epoch 44/120 50/50 [==============================] - 20s 403ms/step - loss: 0.2817 - acc: 0.9359 - val_loss: 1.9231 - val_acc: 0.8010 Epoch 45/120 50/50 [==============================] - 20s 397ms/step - loss: 0.1353 - acc: 0.9774 - val_loss: 1.5370 - val_acc: 0.8179 Epoch 46/120 50/50 [==============================] - 20s 403ms/step - loss: 0.4295 - acc: 0.9572 - val_loss: 1.2789 - val_acc: 0.8377 Epoch 47/120 50/50 [==============================] - 21s 414ms/step - loss: 0.5267 - acc: 0.9300 - val_loss: 2.6555 - val_acc: 0.6821 Epoch 48/120 50/50 [==============================] - 20s 407ms/step - loss: 0.3134 - acc: 0.9307 - val_loss: 2.0917 - val_acc: 0.8141 Epoch 49/120 50/50 [==============================] - 20s 400ms/step - loss: 0.3810 - acc: 0.9421 - val_loss: 3.2587 - val_acc: 0.6685 Epoch 50/120 50/50 [==============================] - 20s 391ms/step - loss: 0.1716 - acc: 0.9528 - val_loss: 1.5568 - val_acc: 0.8455 Epoch 51/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3371 - acc: 0.9256 - val_loss: 2.9138 - val_acc: 0.6984 Epoch 52/120 50/50 [==============================] - 20s 402ms/step - loss: 0.4246 - acc: 0.9269 - val_loss: 1.1428 - val_acc: 0.8822 Epoch 53/120 50/50 [==============================] - 20s 400ms/step - loss: 0.3190 - acc: 0.9528 - val_loss: 1.5136 - val_acc: 0.8179 Epoch 54/120 50/50 [==============================] - 20s 400ms/step - loss: 0.1724 - acc: 0.9730 - val_loss: 1.4298 - val_acc: 0.8220 Epoch 55/120 50/50 [==============================] - 20s 397ms/step - loss: 0.6054 - acc: 0.9332 - val_loss: 2.1137 - val_acc: 0.7935 Epoch 56/120 50/50 [==============================] - 20s 404ms/step - loss: 0.3492 - acc: 0.9421 - val_loss: 2.0164 - val_acc: 0.7618 Epoch 57/120 50/50 [==============================] - 19s 390ms/step - loss: 0.2019 - acc: 0.9413 - val_loss: 2.1647 - val_acc: 0.7772 Epoch 58/120 50/50 [==============================] - 20s 402ms/step - loss: 0.4908 - acc: 0.9382 - val_loss: 1.2995 - val_acc: 0.8586 Epoch 59/120 50/50 [==============================] - 20s 399ms/step - loss: 0.1181 - acc: 0.9774 - val_loss: 1.7610 - val_acc: 0.8424 Epoch 60/120 50/50 [==============================] - 20s 404ms/step - loss: 0.1946 - acc: 0.9572 - val_loss: 1.4560 - val_acc: 0.8455 Epoch 61/120 50/50 [==============================] - 20s 398ms/step - loss: 0.2227 - acc: 0.9528 - val_loss: 1.2832 - val_acc: 0.8152 Epoch 62/120 50/50 [==============================] - 20s 402ms/step - loss: 0.1444 - acc: 0.9743 - val_loss: 1.7223 - val_acc: 0.8377 Epoch 63/120 50/50 [==============================] - 21s 416ms/step - loss: 0.1263 - acc: 0.9756 - val_loss: 2.1361 - val_acc: 0.8016 Epoch 64/120 50/50 [==============================] - 20s 396ms/step - loss: 0.4809 - acc: 0.9357 - val_loss: 2.5345 - val_acc: 0.7382 Epoch 65/120 50/50 [==============================] - 20s 401ms/step - loss: 0.3820 - acc: 0.9344 - val_loss: 1.3146 - val_acc: 0.8397 Epoch 66/120 50/50 [==============================] - 20s 402ms/step - loss: 0.2104 - acc: 0.9363 - val_loss: 1.9159 - val_acc: 0.7775 Epoch 67/120 50/50 [==============================] - 20s 400ms/step - loss: 0.8957 - acc: 0.8964 - val_loss: 1.0899 - val_acc: 0.8995 Epoch 68/120 50/50 [==============================] - 20s 403ms/step - loss: 0.5553 - acc: 0.9472 - val_loss: 2.0033 - val_acc: 0.7827 Epoch 69/120 50/50 [==============================] - 20s 399ms/step - loss: 0.2312 - acc: 0.9584 - val_loss: 1.8276 - val_acc: 0.7962 Epoch 70/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2968 - acc: 0.9597 - val_loss: 3.2622 - val_acc: 0.6832 Epoch 71/120 50/50 [==============================] - 20s 392ms/step - loss: 0.3980 - acc: 0.9647 - val_loss: 1.8584 - val_acc: 0.7962 Epoch 72/120 50/50 [==============================] - 20s 401ms/step - loss: 0.6529 - acc: 0.9256 - val_loss: 1.9297 - val_acc: 0.7984 Epoch 73/120 50/50 [==============================] - 20s 404ms/step - loss: 0.5227 - acc: 0.9325 - val_loss: 1.6571 - val_acc: 0.8016 Epoch 74/120 50/50 [==============================] - 20s 400ms/step - loss: 0.7504 - acc: 0.9432 - val_loss: 2.5409 - val_acc: 0.7618 Epoch 75/120 50/50 [==============================] - 20s 401ms/step - loss: 0.2676 - acc: 0.9540 - val_loss: 2.7344 - val_acc: 0.7582 Epoch 76/120 50/50 [==============================] - 20s 404ms/step - loss: 0.4176 - acc: 0.9275 - val_loss: 4.3984 - val_acc: 0.6047 Epoch 77/120 50/50 [==============================] - 20s 400ms/step - loss: 0.5286 - acc: 0.9465 - val_loss: 1.6492 - val_acc: 0.8370 Epoch 78/120 50/50 [==============================] - 20s 405ms/step - loss: 0.1205 - acc: 0.9824 - val_loss: 1.8200 - val_acc: 0.7827 Epoch 79/120 50/50 [==============================] - 20s 403ms/step - loss: 0.1693 - acc: 0.9528 - val_loss: 2.0680 - val_acc: 0.7772 Epoch 80/120 50/50 [==============================] - 20s 404ms/step - loss: 0.1657 - acc: 0.9749 - val_loss: 1.4603 - val_acc: 0.8403 Epoch 81/120 50/50 [==============================] - 20s 396ms/step - loss: 0.5586 - acc: 0.9357 - val_loss: 2.3228 - val_acc: 0.7772 Epoch 82/120 50/50 [==============================] - 20s 405ms/step - loss: 0.7048 - acc: 0.8852 - val_loss: 1.3139 - val_acc: 0.8770 Epoch 83/120 50/50 [==============================] - 20s 402ms/step - loss: 0.1675 - acc: 0.9768 - val_loss: 2.0945 - val_acc: 0.7826 Epoch 84/120 50/50 [==============================] - 20s 402ms/step - loss: 0.1708 - acc: 0.9566 - val_loss: 1.5811 - val_acc: 0.8403 Epoch 85/120 50/50 [==============================] - 20s 391ms/step - loss: 0.1805 - acc: 0.9559 - val_loss: 2.4566 - val_acc: 0.7772 Epoch 86/120 50/50 [==============================] - 20s 401ms/step - loss: 0.1767 - acc: 0.9749 - val_loss: 1.5822 - val_acc: 0.8351 Epoch 87/120 50/50 [==============================] - 20s 398ms/step - loss: 0.3007 - acc: 0.9426 - val_loss: 3.2866 - val_acc: 0.7255 Epoch 88/120 50/50 [==============================] - 20s 402ms/step - loss: 0.5262 - acc: 0.9307 - val_loss: 1.7417 - val_acc: 0.8429 Epoch 89/120 50/50 [==============================] - 20s 403ms/step - loss: 0.2703 - acc: 0.9236 - val_loss: 3.3720 - val_acc: 0.7174 Epoch 90/120 50/50 [==============================] - 20s 402ms/step - loss: 0.1393 - acc: 0.9743 - val_loss: 2.5065 - val_acc: 0.7644 Epoch 91/120 50/50 [==============================] - 20s 399ms/step - loss: 0.1071 - acc: 0.9837 - val_loss: 1.8325 - val_acc: 0.8125 Epoch 92/120 50/50 [==============================] - 20s 393ms/step - loss: 0.8093 - acc: 0.9242 - val_loss: 1.8764 - val_acc: 0.7984 Epoch 93/120 50/50 [==============================] - 20s 407ms/step - loss: 0.3870 - acc: 0.9338 - val_loss: 2.5679 - val_acc: 0.7826 Epoch 94/120 50/50 [==============================] - 20s 408ms/step - loss: 0.7019 - acc: 0.9294 - val_loss: 1.2821 - val_acc: 0.8586 Epoch 95/120 50/50 [==============================] - 20s 401ms/step - loss: 0.5001 - acc: 0.9161 - val_loss: 1.6783 - val_acc: 0.8587 Epoch 96/120 50/50 [==============================] - 20s 401ms/step - loss: 0.1356 - acc: 0.9774 - val_loss: 1.9563 - val_acc: 0.8037 Epoch 97/120 50/50 [==============================] - 20s 400ms/step - loss: 0.1011 - acc: 0.9837 - val_loss: 1.8003 - val_acc: 0.8614 Epoch 98/120 50/50 [==============================] - 20s 401ms/step - loss: 0.1931 - acc: 0.9591 - val_loss: 1.0143 - val_acc: 0.8639 Epoch 99/120 50/50 [==============================] - 20s 395ms/step - loss: 0.2636 - acc: 0.9534 - val_loss: 1.8141 - val_acc: 0.8342 Epoch 100/120 50/50 [==============================] - 20s 405ms/step - loss: 0.4763 - acc: 0.9591 - val_loss: 1.7331 - val_acc: 0.8613 Epoch 101/120 50/50 [==============================] - 20s 402ms/step - loss: 0.4645 - acc: 0.9407 - val_loss: 1.9227 - val_acc: 0.8587 Epoch 102/120 50/50 [==============================] - 20s 400ms/step - loss: 0.3351 - acc: 0.9419 - val_loss: 1.1206 - val_acc: 0.8770 Epoch 103/120 50/50 [==============================] - 20s 399ms/step - loss: 0.3889 - acc: 0.9413 - val_loss: 1.3640 - val_acc: 0.8641 Epoch 104/120 50/50 [==============================] - 20s 406ms/step - loss: 0.2712 - acc: 0.9578 - val_loss: 3.1699 - val_acc: 0.7592 Epoch 105/120 50/50 [==============================] - 20s 400ms/step - loss: 0.1753 - acc: 0.9628 - val_loss: 1.8292 - val_acc: 0.8424 Epoch 106/120 50/50 [==============================] - 20s 392ms/step - loss: 0.3822 - acc: 0.9616 - val_loss: 1.1680 - val_acc: 0.8586 Epoch 107/120 50/50 [==============================] - 20s 398ms/step - loss: 0.7981 - acc: 0.9363 - val_loss: 1.1733 - val_acc: 0.8614 Epoch 108/120 50/50 [==============================] - 21s 415ms/step - loss: 0.1644 - acc: 0.9768 - val_loss: 1.9287 - val_acc: 0.8429 Epoch 109/120 50/50 [==============================] - 20s 403ms/step - loss: 0.2400 - acc: 0.9584 - val_loss: 2.5409 - val_acc: 0.7772 Epoch 110/120 50/50 [==============================] - 20s 407ms/step - loss: 0.2132 - acc: 0.9578 - val_loss: 3.0584 - val_acc: 0.7618 Epoch 111/120 50/50 [==============================] - 20s 397ms/step - loss: 0.6707 - acc: 0.9357 - val_loss: 1.2343 - val_acc: 0.8641 Epoch 112/120 50/50 [==============================] - 20s 402ms/step - loss: 0.5155 - acc: 0.9572 - val_loss: 3.6222 - val_acc: 0.7251 Epoch 113/120 50/50 [==============================] - 20s 390ms/step - loss: 0.4852 - acc: 0.9300 - val_loss: 1.1733 - val_acc: 0.8533 Epoch 114/120 50/50 [==============================] - 20s 403ms/step - loss: 0.5709 - acc: 0.9092 - val_loss: 1.6209 - val_acc: 0.8403 Epoch 115/120 50/50 [==============================] - 20s 397ms/step - loss: 0.7178 - acc: 0.9394 - val_loss: 1.4134 - val_acc: 0.8614 Epoch 116/120 50/50 [==============================] - 20s 402ms/step - loss: 0.1316 - acc: 0.9787 - val_loss: 1.8731 - val_acc: 0.8246 Epoch 117/120 50/50 [==============================] - 20s 403ms/step - loss: 0.2104 - acc: 0.9419 - val_loss: 1.8435 - val_acc: 0.8342 Epoch 118/120 50/50 [==============================] - 20s 404ms/step - loss: 0.4751 - acc: 0.9578 - val_loss: 1.6844 - val_acc: 0.8403 Epoch 119/120 50/50 [==============================] - 20s 401ms/step - loss: 0.1211 - acc: 0.9831 - val_loss: 1.9069 - val_acc: 0.8587 Epoch 120/120 50/50 [==============================] - 20s 393ms/step - loss: 0.6273 - acc: 0.9432 - val_loss: 2.0415 - val_acc: 0.8194 2411.024879693985
#assign variables for the training history of the models
history_1= vgg16_history
history_2=inception_v3_history
history_3=resnet50_history
#Define a function to generate and save a plot of the training and validation loss and accuracy for a given model
def ModelGraphTrainngSummary(history,N,model_name):
# set the matplotlib backend so figures can be saved in the background
# plot the training loss and accuracy
import sys
import matplotlib
print("Generating plots...")
sys.stdout.flush()
matplotlib.use("Agg")
matplotlib.pyplot.style.use("ggplot")
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(np.arange(0, N), history.history["loss"], label="train_loss")
matplotlib.pyplot.plot(np.arange(0, N), history.history["val_loss"], label="val_loss")
#matplotlib.pyplot.plot(np.arange(0, N), history.history["acc"], label="train_acc")
#matplotlib.pyplot.plot(np.arange(0, N), history.history["val_acc"], label="val_acc")
matplotlib.pyplot.title("Training Loss and Accuracy on Brain Tumor Classification")
matplotlib.pyplot.xlabel("Epoch #")
matplotlib.pyplot.ylabel("Loss/Accuracy of "+model_name)
matplotlib.pyplot.legend(loc="lower left")
matplotlib.pyplot.savefig("plot.png")
def ModelGraphTrainngSummaryAcc(history,N,model_name):
#Import the necessary libraries and print a message indicating that plots are being generated.
import sys
import matplotlib
print("Generating plots...")
sys.stdout.flush()
#Set the backend of matplotlib to 'Agg' so that the plots are saved in the background.
matplotlib.use("Agg")
matplotlib.pyplot.style.use("ggplot")
matplotlib.pyplot.figure()
matplotlib.pyplot.plot(np.arange(0, N), history.history["acc"], label="train_acc")
matplotlib.pyplot.plot(np.arange(0, N), history.history["val_acc"], label="val_acc")
#Plot the training accuracy and validation accuracy as a function of epoch, using the data from the history object.
matplotlib.pyplot.title("Training Loss and Accuracy on Brain Tumor Classification")
matplotlib.pyplot.xlabel("Epoch #")
matplotlib.pyplot.ylabel("Accuracy of "+ model_name)
matplotlib.pyplot.legend(loc="lower left")
matplotlib.pyplot.savefig("plot.png")
# iterate over a list of dictionaries where each dictionary represents a pre-trained model, its training history, and its name.
for x_model in [{'name':'VGG-16','history':history_1,'model':vgg16},
{'name':'Inception_v3','history':history_2,'model':inception_v3},
{'name':'Resnet','history':history_3,'model':resnet50}]:
ModelGraphTrainngSummary(x_model['history'],120,x_model['name'])
ModelGraphTrainngSummaryAcc(x_model['history'],120,x_model['name'])
#"predictions" variable stores the predicted class labels on the validation set by each pre-trained model.
predictions = x_model['model'].predict(X_val_prep)
predictions = [1 if x>0.5 else 0 for x in predictions]
#calculate the validation accuracy for each pre-trained model.
accuracy = accuracy_score(y_val, predictions)
print('Val Accuracy = %.2f' % accuracy)
#plot the confusion matrix for each pre-trained model.
confusion_mtx = confusion_matrix(y_val, predictions)
cm = plot_confusion_matrix(confusion_mtx, classes = list(labels.items()), normalize=False)
Generating plots... Generating plots... Val Accuracy = 0.90 Generating plots... Generating plots... Val Accuracy = 0.64 Generating plots... Generating plots... Val Accuracy = 0.86
In conclusion, the project aimed to explore the efficiency of different deep learning models in classifying brain tumours. Three transfer learning models, VGG16, InceptionV3, and ResNet50, were trained and compared with each other. The VGG16 model demonstrated the best performance, achieving high accuracy and validation scores. The other two transfer learning models approach showed comparatively lower performance. The results suggest that transfer learning is an effective way to improve deep learning model performance while reducing computation time.
# delete the dictionaries to clean up space
!rm -rf TRAIN TEST VAL TRAIN_CROP TEST_CROP VAL_CROP
# save the model
#save the trained models in current working directory
vgg16.save('VGG_model.h5')
inception_v3.save('inception_v3.h5')
resnet50.save('resnet50.h5')